Code
library(tidymodels)
library(tidyverse)
library(janitor)
library(naniar)
library(assertr)
library(corrplot)
library(gridExtra)
## Turn off scientific notation for readable numbers
options(scipen = 999)
We filter out missing values in our key outcome, remittances_gdp: rows with a missing outcome would cause our predictive models to fail and provide no additional information. For missingness in our predictors, however, imputation will help ensure that non-systematic missingness does not adversely reduce the sample size available to our model.
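Since we keep rows with missing predictors, a minimal sketch of median imputation (on a hypothetical toy vector, not the actual training columns) illustrates the idea:

```r
## Toy vector standing in for a predictor column such as `poverty`
x <- c(2.1, NA, 3.5, 4.0, NA, 2.9)

## Replace each NA with the median of the observed values
x_imputed <- ifelse(is.na(x), median(x, na.rm = TRUE), x)

x_imputed  # the two NAs become 3.2, the observed median
```

In a tidymodels workflow this would typically be expressed as a recipe step such as step_impute_median() rather than done by hand.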
## Read in the CSV file
remittances <- read_csv("../data/remittances.csv") |>
  filter(!is.na(remittances_gdp))
## Set seed for reproducibility
set.seed(20251211)
## Split data: 80% training, 20% testing
remit_split <- initial_split(data = remittances, prop = 0.8)
remit_train <- training(x = remit_split)
remit_test <- testing(x = remit_split)
## View the first few rows and column types
glimpse(remit_train)
Rows: 3,292
Columns: 19
$ `Country Name` <chr> "Germany", "Brazil", "Namibia", "Mexico", "Tanzania…
$ year <dbl> 2004, 2012, 2009, 2000, 2023, 2003, 2019, 2022, 201…
$ remittances <dbl> 6515541641, 2784072055, 76511275, 7524742980, 75881…
$ remittances_gdp <dbl> 0.22842973, 0.11293366, 0.85594119, 1.01403249, 0.9…
$ `Country Code` <chr> "DEU", "BRA", "NAM", "MEX", "TZA", "NAM", "CHE", "L…
$ gdp <dbl> 2852317768062, 2465227802807, 8938847189, 742061329…
$ stock <dbl> 1093030.29, 323363.20, NA, 8072288.08, NA, NA, 2336…
$ unemployment <dbl> 10.727, 7.251, 22.254, 2.646, 2.582, 22.052, 4.394,…
$ gdp_per <dbl> 34566.7359, 12521.7213, 4302.9137, 7524.0271, 1224.…
$ inflation <dbl> 1.1550593, 7.9431269, 6.9454043, 10.6979646, 2.7482…
$ vulnerable_emp <dbl> 7.044898, 25.346971, 28.026294, 31.785613, 83.87807…
$ maternal_mortality <dbl> 6, 60, 547, 56, 276, 290, 6, 477, 68, 101, 41, 842,…
$ exchange_rate <dbl> 0.8039216, 1.9530686, 8.5228198, 9.4555583, 2383.04…
$ deportations <dbl> 91, 639, 1, 44564, 19, 1, 4, 1, 4, NA, 53, 63, 533,…
$ internet <dbl> 64.73000, 48.56000, 6.50000, 5.08138, 29.06380, 3.3…
$ poverty <dbl> 0.0, 6.4, 38.6, 16.3, NA, 47.6, 0.0, NA, NA, 38.2, …
$ dist_pop <dbl> 6035.334, 7694.307, 11720.190, 3369.053, NA, 11720.…
$ dist_cap <dbl> 6717.542, 6794.436, 11908.000, 3037.916, NA, 11908.…
$ terror <dbl> 2, 4, 2, 3, NA, 3, 1, 3, 2, 2, 2, 3, 2, 2, 2, 2, 4,…
The training data has 3,292 rows and 19 columns.
## Get min, max, mean, median for all variables
summary(remit_train)
  Country Name            year        remittances          remittances_gdp
Length:3292 Min. :1994 Min. : 6038 Min. : 0.000029
Class :character 1st Qu.:2002 1st Qu.: 74764744 1st Qu.: 0.366894
Mode :character Median :2010 Median : 545314436 Median : 1.608458
Mean :2010 Mean : 2655647852 Mean : 4.070970
3rd Qu.:2017 3rd Qu.: 2042426540 3rd Qu.: 4.620723
Max. :2024 Max. :137674533896 Max. :108.402724
Country Code gdp stock
Length:3292 Min. : 37184925 Min. : 259.6
Class :character 1st Qu.: 6495301634 1st Qu.: 39746.2
Mode :character Median : 25703593810 Median : 97162.5
Mean : 303934455901 Mean : 388798.9
3rd Qu.: 168683410868 3rd Qu.: 260319.4
Max. :18316765021700 Max. :23126089.8
NA's :1460
unemployment gdp_per inflation vulnerable_emp
Min. : 0.100 Min. : 109.6 Min. : -32.741 Min. : 0.1257
1st Qu.: 3.829 1st Qu.: 1356.2 1st Qu.: 1.834 1st Qu.:13.8649
Median : 6.232 Median : 4324.5 Median : 4.197 Median :33.2140
Mean : 7.820 Mean : 12367.7 Mean : 12.021 Mean :38.8326
3rd Qu.:10.491 3rd Qu.: 14711.9 3rd Qu.: 8.589 3rd Qu.:61.1256
Max. :34.007 Max. :138935.0 Max. :4800.532 Max. :94.7169
NA's :167 NA's :32 NA's :267
maternal_mortality exchange_rate deportations internet
Min. : 1.0 Min. : 0.0004 Min. : 0.0 Min. : 0.00
1st Qu.: 14.0 1st Qu.: 1.7617 1st Qu.: 7.0 1st Qu.: 3.53
Median : 61.0 Median : 8.2770 Median : 34.0 Median : 23.00
Mean : 172.7 Mean : 390.1688 Mean : 989.7 Mean : 33.84
3rd Qu.: 225.5 3rd Qu.: 116.3786 3rd Qu.: 134.0 3rd Qu.: 62.60
Max. :5721.0 Max. :15236.8847 Max. :90504.0 Max. :100.00
NA's :153 NA's :34 NA's :360 NA's :223
poverty dist_pop dist_cap terror
Min. : 0.00 Min. : 548.4 Min. : 737 Min. :1.000
1st Qu.: 0.25 1st Qu.: 6035.3 1st Qu.: 6274 1st Qu.:1.000
Median : 1.50 Median : 7873.0 Median : 8081 Median :2.000
Mean :10.33 Mean : 8471.3 Mean : 8618 Mean :2.345
3rd Qu.:11.55 3rd Qu.:11381.9 3rd Qu.:11674 3rd Qu.:3.000
Max. :89.40 Max. :16180.3 Max. :16371 Max. :5.000
NA's :1949 NA's :145 NA's :145 NA's :148
Key observation: the monetary variables (remittances, gdp) span many orders of magnitude, and several predictors have substantial numbers of NA's.
## Detailed structure of the data
str(remit_train)
spc_tbl_ [3,292 × 19] (S3: spec_tbl_df/tbl_df/tbl/data.frame)
$ Country Name : chr [1:3292] "Germany" "Brazil" "Namibia" "Mexico" ...
$ year : num [1:3292] 2004 2012 2009 2000 2023 ...
$ remittances : num [1:3292] 6515541641 2784072055 76511275 7524742980 758814528 ...
$ remittances_gdp : num [1:3292] 0.228 0.113 0.856 1.014 0.96 ...
$ Country Code : chr [1:3292] "DEU" "BRA" "NAM" "MEX" ...
$ gdp : num [1:3292] 2852317768062 2465227802807 8938847189 742061329749 79062403837 ...
$ stock : num [1:3292] 1093030 323363 NA 8072288 NA ...
$ unemployment : num [1:3292] 10.73 7.25 22.25 2.65 2.58 ...
$ gdp_per : num [1:3292] 34567 12522 4303 7524 1224 ...
$ inflation : num [1:3292] 1.16 7.94 6.95 10.7 2.75 ...
$ vulnerable_emp : num [1:3292] 7.04 25.35 28.03 31.79 83.88 ...
$ maternal_mortality: num [1:3292] 6 60 547 56 276 290 6 477 68 101 ...
$ exchange_rate : num [1:3292] 0.804 1.953 8.523 9.456 2383.043 ...
$ deportations : num [1:3292] 91 639 1 44564 19 ...
$ internet : num [1:3292] 64.73 48.56 6.5 5.08 29.06 ...
$ poverty : num [1:3292] 0 6.4 38.6 16.3 NA 47.6 0 NA NA 38.2 ...
$ dist_pop : num [1:3292] 6035 7694 11720 3369 NA ...
$ dist_cap : num [1:3292] 6718 6794 11908 3038 NA ...
$ terror : num [1:3292] 2 4 2 3 NA 3 1 3 2 2 ...
- attr(*, "spec")=
.. cols(
.. `Country Name` = col_character(),
.. year = col_double(),
.. remittances = col_double(),
.. remittances_gdp = col_double(),
.. `Country Code` = col_character(),
.. gdp = col_double(),
.. stock = col_double(),
.. unemployment = col_double(),
.. gdp_per = col_double(),
.. inflation = col_double(),
.. vulnerable_emp = col_double(),
.. maternal_mortality = col_double(),
.. exchange_rate = col_double(),
.. deportations = col_double(),
.. internet = col_double(),
.. poverty = col_double(),
.. dist_pop = col_double(),
.. dist_cap = col_double(),
.. terror = col_double()
.. )
- attr(*, "problems")=<externalptr>
All numeric variables are stored as num or dbl (double precision), and text variables are stored as chr (character).
## Check current column names
names(remit_train)
 [1] "Country Name"       "year"               "remittances"
[4] "remittances_gdp" "Country Code" "gdp"
[7] "stock" "unemployment" "gdp_per"
[10] "inflation" "vulnerable_emp" "maternal_mortality"
[13] "exchange_rate" "deportations" "internet"
[16] "poverty" "dist_pop" "dist_cap"
[19] "terror"
Some column names have spaces and capital letters.
## Convert to lowercase with underscores
remit_train <- remit_train |>
clean_names()
## Verify the cleaned names
names(remit_train)
 [1] "country_name"       "year"               "remittances"
[4] "remittances_gdp" "country_code" "gdp"
[7] "stock" "unemployment" "gdp_per"
[10] "inflation" "vulnerable_emp" "maternal_mortality"
[13] "exchange_rate" "deportations" "internet"
[16] "poverty" "dist_pop" "dist_cap"
[19] "terror"
Now all column names are lowercase with underscores.
## Count NA values in each column
colSums(is.na(remit_train))
      country_name               year        remittances    remittances_gdp
0 0 0 0
country_code gdp stock unemployment
0 0 1460 167
gdp_per inflation vulnerable_emp maternal_mortality
0 32 267 153
exchange_rate deportations internet poverty
34 360 223 1949
dist_pop dist_cap terror
145 145 148
Several variables have missing data. Let's calculate the percentage missing for each.
## Calculate percent missing for each variable
remit_train |>
  summarise(across(everything(), ~sum(is.na(.)) / n() * 100))
# A tibble: 1 × 19
country_name year remittances remittances_gdp country_code gdp stock
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 0 0 0 0 0 0 44.3
# ℹ 12 more variables: unemployment <dbl>, gdp_per <dbl>, inflation <dbl>,
# vulnerable_emp <dbl>, maternal_mortality <dbl>, exchange_rate <dbl>,
# deportations <dbl>, internet <dbl>, poverty <dbl>, dist_pop <dbl>,
# dist_cap <dbl>, terror <dbl>
Major finding: stock is missing for 44.3% of rows, and poverty is missing for roughly 59% of rows (1,949 of 3,292); both will need imputation or exclusion.
## Get min, max, mean, median, quartiles
summary(remit_train$remittances)
        Min.      1st Qu.       Median         Mean      3rd Qu.         Max.
        6038     74764744    545314436   2655647852   2042426540 137674533896
The mean ($2.66 billion) is much larger than the median ($545 million). The data is right-skewed.
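This mean-versus-median gap is the classic signature of right skew; a quick check on simulated toy data (not the remittances column itself) shows how a few large values pull the mean above the median:

```r
## Toy right-skewed data: one large value dominates the mean
x <- c(1, 2, 3, 4, 5, 100)

mean(x)              # pulled up by the outlier
median(x)            # 3.5, resistant to the outlier
mean(x) > median(x)  # TRUE, the mark of right skew
```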
## Faceted histograms of key variables (from class notes Section 5.6.1)
remit_train |>
select(remittances, remittances_gdp, gdp, stock, unemployment,deportations) |>
pivot_longer(everything()) |>
ggplot(aes(value)) +
geom_histogram(bins = 30) +
facet_wrap(~name, scales = "free") +
  theme_minimal()
Warning: Removed 1987 rows containing non-finite outside the scale range
(`stat_bin()`).
## CAVEAT: we could not 'clean' the x axis (a log scale would fix it, but
## we want to show how skewed the distributions are)
## Create a point plot (from class notes Section 5.6.1)
remit_train |>
ggplot(aes(remittances, 1)) +
geom_point(alpha = 0.2) +
scale_y_continuous(breaks = 0) +
labs(y = NULL, title = "Distribution of Remittances") +
theme_bw() +
  theme(panel.border = element_blank())
Most points cluster on the left (lower values) with a few extreme points on the right (confirms right-skewness).
## Create histogram with 30 bins
remit_train |>
ggplot(aes(x = remittances)) +
geom_histogram(bins = 30, fill = "steelblue") +
theme_minimal() +
labs(title = "Distribution of Remittances",
x = "Remittances (USD)",
y = "Count")The histogram is heavily concentrated on the left with a long tail to the right. This is classic right-skewed data.
## Create boxplot to see outliers
remit_train |>
ggplot(aes(y = remittances)) +
geom_boxplot(fill = "steelblue") +
theme_minimal() +
labs(title = "Boxplot of Remittances",
y = "Remittances (USD)")Many points appear above the upper whisker (outliers). These are likely large countries like Mexico that receive billions in remittances.
## Get summary statistics
summary(remit_train$gdp)
        Min.      1st Qu.       Median         Mean      3rd Qu.
37184925 6495301634 25703593810 303934455901 168683410868
Max.
18316765021690
GDP also shows a huge range, from $37 million to $18.3 trillion.
## Create point plot for GDP
remit_train |>
ggplot(aes(gdp, 1)) +
geom_point(alpha = 0.2) +
scale_y_continuous(breaks = 0) +
labs(y = NULL, title = "Distribution of GDP") +
theme_bw() +
  theme(panel.border = element_blank())
GDP shows the same right-skewed pattern as remittances. Large economies have much higher GDP than small economies.
## Get summary statistics
summary(remit_train$unemployment)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.    NA's
0.100 3.829 6.232 7.820 10.491 34.007 167
Unemployment ranges from 0.10% to 34.01%.
## Create point plot for unemployment
remit_train |>
ggplot(aes(unemployment, 1)) +
geom_point(alpha = 0.2) +
scale_y_continuous(breaks = 0) +
labs(y = NULL, title = "Distribution of Unemployment") +
theme_bw() +
  theme(panel.border = element_blank())
Warning: Removed 167 rows containing missing values or values outside the scale range
(`geom_point()`).
Unemployment appears more evenly distributed than remittances or GDP.
## Get summary statistics
summary(remit_train$inflation)
    Min.  1st Qu.   Median     Mean  3rd Qu.     Max.     NA's
-32.741 1.834 4.197 12.021 8.589 4800.532 32
The maximum inflation is 4,801% (an extreme outlier). Most inflation values fall between 1.8% and 8.6% (the interquartile range).
## Create point plot for inflation
remit_train |>
ggplot(aes(inflation, 1)) +
geom_point(alpha = 0.2) +
scale_y_continuous(breaks = 0) +
labs(y = NULL, title = "Distribution of Inflation") +
theme_bw() +
  theme(panel.border = element_blank())
Warning: Removed 32 rows containing missing values or values outside the scale range
(`geom_point()`).
## Test that remittances > 0 when not missing
remit_train |>
filter(!is.na(remittances)) |>
verify(remittances > 0) |>
  summarise(mean_remittances = mean(remittances, na.rm = TRUE))
# A tibble: 1 × 1
mean_remittances
<dbl>
1 2655647852.
All remittances are positive. The mean is $2.66 billion.
## Test that years are between 1994 and 2024
remit_train |>
verify(year >= 1994 & year <= 2024) |>
  summarise(mean_year = mean(year))
# A tibble: 1 × 1
mean_year
<dbl>
1 2010.
All years are within the expected range.
## Test that unemployment is between 0 and 100
remit_train |>
filter(!is.na(unemployment)) |>
verify(unemployment >= 0 & unemployment <= 100) |>
  summarise(mean_unemployment = mean(unemployment, na.rm = TRUE))
# A tibble: 1 × 1
mean_unemployment
<dbl>
1 7.82
All unemployment values are valid percentages (0-100%).
Since remittances and GDP are highly right-skewed, we create and examine log-transformed versions.
## Create log-transformed versions of skewed variables
remit_train <- remit_train |>
mutate(
log_remittances = log(remittances + 1),
log_gdp = log(gdp + 1)
  )
We add 1 before taking the log to handle any zero values (log(0) is undefined).
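As an aside, base R's log1p() computes log(1 + x) directly and is numerically more accurate for values near zero; a small check on toy values (an identity check, not project data) confirms the two forms agree:

```r
x <- c(0, 1, 1000, 2.5e9)

## log(x + 1) and log1p(x) are the same transformation
all.equal(log(x + 1), log1p(x))

## Zeros map to zero on the log scale, so no rows are lost
log1p(0)  # 0
```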
## Get summary statistics for log remittances
summary(remit_train$log_remittances)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
8.706 18.130 20.117 19.744 21.437 25.648
## Create histogram for log-transformed remittances
remit_train |>
filter(!is.na(log_remittances)) |>
ggplot(aes(x = log_remittances)) +
geom_histogram(bins = 30, fill = "darkgreen", color = "white") +
theme_minimal() +
labs(title = "Distribution of Log-Transformed Remittances",
subtitle = "Much more normal distribution after log transformation",
x = "Log(Remittances + 1)",
y = "Count")The log-transformed remittances show a much more normal distribution compared to the original right-skewed data.
## Create boxplot for log remittances
remit_train |>
filter(!is.na(log_remittances)) |>
ggplot(aes(y = log_remittances)) +
geom_boxplot(fill = "darkgreen") +
theme_minimal() +
labs(title = "Boxplot of Log-Transformed Remittances",
y = "Log(Remittances + 1)")Fewer outliers visible after log transformation.
## Get summary statistics for log GDP
summary(remit_train$log_gdp)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
17.43 22.59 23.97 24.13 25.85 30.54
## Create histogram for log-transformed GDP
remit_train |>
filter(!is.na(log_gdp)) |>
ggplot(aes(x = log_gdp)) +
geom_histogram(bins = 30, fill = "darkblue", color = "white") +
theme_minimal() +
labs(title = "Distribution of Log-Transformed GDP",
subtitle = "More normal distribution after log transformation",
x = "Log(GDP + 1)",
y = "Count")Log GDP also shows a more normal distribution.
## Create comparison plots
p1 <- remit_train |>
filter(!is.na(remittances)) |>
ggplot(aes(x = remittances)) +
geom_histogram(bins = 30, fill = "steelblue") +
theme_minimal() +
labs(title = "Original Remittances (Right-Skewed)",
x = "Remittances (USD)")
p2 <- remit_train |>
filter(!is.na(log_remittances)) |>
ggplot(aes(x = log_remittances)) +
geom_histogram(bins = 30, fill = "darkgreen") +
theme_minimal() +
labs(title = "Log-Transformed Remittances (More Normal)",
x = "Log(Remittances + 1)")
grid.arrange(p1, p2, ncol = 2)
The log transformation successfully converts the right-skewed distribution into a more normal distribution, which is better for modeling.
## Create comparison plots for GDP
p3 <- remit_train |>
filter(!is.na(gdp)) |>
ggplot(aes(x = gdp)) +
geom_histogram(bins = 30, fill = "steelblue") +
theme_minimal() +
labs(title = "Original GDP (Right-Skewed)",
x = "GDP (USD)")
p4 <- remit_train |>
filter(!is.na(log_gdp)) |>
ggplot(aes(x = log_gdp)) +
geom_histogram(bins = 30, fill = "darkblue") +
theme_minimal() +
labs(title = "Log-Transformed GDP (More Normal)",
x = "Log(GDP + 1)")
grid.arrange(p3, p4, ncol = 2)
## Scatter plot with log-transformed variables
remit_train |>
filter(!is.na(log_gdp), !is.na(log_remittances)) |>
ggplot(aes(x = log_gdp, y = log_remittances)) +
geom_point(alpha = 0.3, color = "steelblue") +
geom_smooth(method = "lm", se = TRUE, color = "red") +
theme_minimal() +
labs(title = "Log GDP vs Log Remittances",
subtitle = "Clearer linear relationship after log transformation",
x = "Log(GDP + 1)",
y = "Log(Remittances + 1)")`geom_smooth()` using formula = 'y ~ x'
The relationship between log GDP and log remittances is more linear than the original variables, which will improve model performance.
ggplot(remit_train, aes(x = deportations + 1, y = remittances + 1)) +
geom_point(alpha = 0.3, color = "steelblue") +
geom_smooth(method = "lm", se = TRUE, color = "red") +
scale_x_log10() +
scale_y_log10() +
labs(
title = "Log–Log Relationship Between Deportations and Remittances",
x = "Log(Deportations + 1)",
y = "Log(Remittances + 1)"
) +
  theme_minimal()
`geom_smooth()` using formula = 'y ~ x'
Warning: Removed 360 rows containing non-finite outside the scale range
(`stat_smooth()`).
Warning: Removed 360 rows containing missing values or values outside the scale range
(`geom_point()`).
## Count distinct country names
n_distinct(remit_train$country_name)
[1] 151
We have 151 different countries in the dataset.
## Show how many observations per country
head(table(remit_train$country_name), 20)
Afghanistan Albania Algeria Angola
16 25 27 12
Antigua and Barbuda Argentina Armenia Australia
25 22 23 21
Austria Azerbaijan Bangladesh Barbados
22 21 22 25
Belarus Belgium Belize Benin
26 22 20 26
Bermuda Bhutan Bolivia Botswana
16 12 29 26
Most countries have between 20-30 observations, representing roughly 20-30 years of data.
## Count observations per country and plot top 20
remit_train |>
count(country_name, sort = TRUE) |>
slice_head(n = 20) |>
ggplot(aes(x = reorder(country_name, n), y = n)) +
geom_bar(stat = "identity", fill = "steelblue") +
coord_flip() +
theme_minimal() +
labs(title = "Top 20 Countries by Number of Observations",
x = "Country",
y = "Count")Countries are fairly evenly represented. Bahrain has the most observations (30), while several countries have around 20-29 observations.
remit_train %>%
filter(year == 2024) %>%
slice_max(remittances, n = 15) %>%
ggplot(aes(
x = fct_reorder(country_name, remittances),
y = remittances / 1e9
)) +
geom_col(fill = "steelblue") +
coord_flip() +
labs(
title = "Top 15 Remittance Receivers (2024)",
x = "Country",
y = "Remittances (Billions USD)"
) +
  theme_minimal()
# Top 15 by remittances/GDP (2024)
remit_train %>%
filter(year == 2024) %>%
slice_max(remittances_gdp, n = 15) %>% # preferred over top_n()
mutate(`Country Name` = fct_reorder(country_name, remittances_gdp)) %>%
ggplot(aes(x = `Country Name`, y = remittances_gdp)) +
geom_col(fill = "coral") +
coord_flip() +
labs(
title = "Top 15 Countries: Remittances as % GDP (2024)",
x = "Country",
y = "Remittances (% GDP)"
) +
  theme_minimal()
remit_train |>
group_by(country_name) |>
summarize(mean_ratio = mean(remittances_gdp, na.rm = TRUE)) |>
arrange(desc(mean_ratio)) |>
  slice_head(n = 10)
# A tibble: 10 × 2
country_name mean_ratio
<chr> <dbl>
1 Lesotho 42.9
2 Tonga 31.8
3 Bermuda 21.1
4 Nepal 20.2
5 Samoa 19.6
6 Lebanon 19.5
7 El Salvador 17.9
8 Kosovo 17.0
9 Jordan 16.1
10 Honduras 15.1
top10 <- remit_train |>
group_by(country_name) |>
summarize(mean_ratio = mean(remittances_gdp, na.rm = TRUE)) |>
arrange(desc(mean_ratio)) |>
slice_head(n = 10)
ggplot(top10, aes(x = reorder(country_name, mean_ratio), y = mean_ratio)) +
geom_col(fill = "coral") +
coord_flip() +
labs(
title = "Top 10 Countries by Average Remittances % of GDP",
x = "Country",
y = "Average Remittances/GDP"
) +
  theme_minimal()
remit_train |>
filter(country_name %in% c(
"Nicaragua", "El Salvador", "Honduras", "Guatemala", "Haiti", "India"
)) |>
ggplot(aes(x = year, y = remittances_gdp, color = country_name)) +
geom_line(linewidth = 1) +
scale_y_log10() +
labs(
title = "Remittance Trends Over Time",
subtitle = "Selected Countries (log scale)",
x = "Year",
y = "Remittances (% of GDP, log scale)",
color = "Country"
) +
  theme_minimal()
## Start assessing countries of interest
countries_of_interest <- c( "Nicaragua", "El Salvador", "Honduras", "Guatemala",
"Haiti", "India")
filtered <- remit_train |>
filter(country_name %in% countries_of_interest)
ggplot(filtered, aes(log(stock), log(remittances), color = country_name)) +
geom_point(alpha = 0.6) +
geom_smooth(method = "lm", se = FALSE) +
labs(
x = "log(stock)",
y = "log(remittances)",
title = "Stock–Remittance Relationship for Selected Countries",
color = "Country"
) +
  theme_minimal()
`geom_smooth()` using formula = 'y ~ x'
Warning: Removed 13 rows containing non-finite outside the scale range
(`stat_smooth()`).
Warning: Removed 13 rows containing missing values or values outside the scale range
(`geom_point()`).
## Comment: comparing the stock-remittance relationship for selected countries
## Scatter plot with trend line
remit_train |>
ggplot(aes(x = gdp, y = remittances)) +
geom_point(alpha = 0.3, color = "steelblue") +
geom_smooth(method = "lm", se = TRUE, color = "red") +
theme_minimal() +
labs(title = "Relationship Between GDP and Remittances (Original Scale)",
x = "GDP (USD)",
y = "Remittances (USD)")`geom_smooth()` using formula = 'y ~ x'
There is a clear positive relationship. Countries with larger economies (higher GDP) tend to receive more remittances in absolute dollar amounts. The red line shows the linear trend.
## Scatter plot with log-transformed variables
remit_train |>
filter(!is.na(log_gdp), !is.na(log_remittances)) |>
ggplot(aes(x = log_gdp, y = log_remittances)) +
geom_point(alpha = 0.3, color = "darkgreen") +
geom_smooth(method = "lm", se = TRUE, color = "red") +
theme_minimal() +
labs(title = "Log GDP vs Log Remittances (Log Scale - Better Linear Fit)",
subtitle = "This relationship is more appropriate for linear regression models",
x = "Log(GDP + 1)",
y = "Log(Remittances + 1)")`geom_smooth()` using formula = 'y ~ x'
The log-transformed relationship is more linear and will produce better model predictions.
## Scatter plot with trend line
remit_train |>
ggplot(aes(x = gdp_per, y = remittances_gdp)) +
geom_point(alpha = 0.3, color = "steelblue") +
geom_smooth(method = "lm", se = TRUE, color = "red") +
theme_minimal() +
labs(title = "GDP Per Capita vs Remittances as % of GDP",
x = "GDP Per Capita (USD)",
y = "Remittances as % of GDP")`geom_smooth()` using formula = 'y ~ x'
Poorer countries (lower GDP per capita) depend more heavily on remittances as a percentage of their economy. Richer countries receive remittances but they represent a smaller share of their total GDP.
## Scatter plot with trend line
remit_train |>
ggplot(aes(x = unemployment, y = remittances)) +
geom_point(alpha = 0.3, color = "steelblue") +
geom_smooth(method = "lm", se = TRUE, color = "red") +
theme_minimal() +
labs(title = "Unemployment vs Remittances",
x = "Unemployment Rate (%)",
y = "Remittances (USD)")`geom_smooth()` using formula = 'y ~ x'
Warning: Removed 167 rows containing non-finite outside the scale range
(`stat_smooth()`).
Warning: Removed 167 rows containing missing values or values outside the scale range
(`geom_point()`).
There is a slight but weak negative relationship; unemployment does not appear to be a strong predictor of remittances.
## Calculate correlations between all numeric variables
remit_train |>
select(where(is.numeric)) |>
cor(use = "complete.obs") |>
  round(2)
                    year remittances remittances_gdp   gdp stock unemployment
year 1.00 0.18 0.07 0.13 -0.01 -0.07
remittances 0.18 1.00 0.02 0.47 0.47 -0.12
remittances_gdp 0.07 0.02 1.00 -0.21 0.03 0.03
gdp 0.13 0.47 -0.21 1.00 0.18 -0.09
stock -0.01 0.47 0.03 0.18 1.00 -0.13
unemployment -0.07 -0.12 0.03 -0.09 -0.13 1.00
gdp_per 0.22 0.02 -0.38 0.19 -0.09 -0.07
inflation -0.13 -0.06 0.03 -0.12 0.00 -0.06
vulnerable_emp -0.08 0.05 0.39 -0.10 0.06 -0.12
maternal_mortality -0.10 0.04 0.11 -0.11 -0.01 -0.19
exchange_rate 0.03 0.05 -0.05 -0.03 -0.05 -0.09
deportations -0.09 0.28 0.23 0.01 0.76 -0.13
internet 0.68 0.06 -0.27 0.20 -0.08 0.02
poverty -0.28 0.01 0.18 -0.11 0.04 -0.12
dist_pop 0.06 0.08 -0.09 0.13 -0.17 -0.06
dist_cap 0.06 0.09 -0.11 0.13 -0.19 -0.04
terror -0.09 0.21 0.24 0.04 0.21 -0.12
log_remittances 0.26 0.69 0.13 0.36 0.30 -0.13
log_gdp 0.14 0.45 -0.59 0.60 0.19 -0.06
gdp_per inflation vulnerable_emp maternal_mortality
year 0.22 -0.13 -0.08 -0.10
remittances 0.02 -0.06 0.05 0.04
remittances_gdp -0.38 0.03 0.39 0.11
gdp 0.19 -0.12 -0.10 -0.11
stock -0.09 0.00 0.06 -0.01
unemployment -0.07 -0.06 -0.12 -0.19
gdp_per 1.00 -0.30 -0.63 -0.35
inflation -0.30 1.00 0.15 0.17
vulnerable_emp -0.63 0.15 1.00 0.65
maternal_mortality -0.35 0.17 0.65 1.00
exchange_rate -0.18 0.07 0.27 0.24
deportations -0.14 0.02 0.09 0.00
internet 0.72 -0.28 -0.60 -0.43
poverty -0.44 0.25 0.71 0.74
dist_pop -0.19 0.11 0.28 0.22
dist_cap -0.15 0.10 0.25 0.21
terror -0.58 0.27 0.57 0.39
log_remittances 0.11 -0.14 -0.01 -0.06
log_gdp 0.51 -0.16 -0.40 -0.23
exchange_rate deportations internet poverty dist_pop
year 0.03 -0.09 0.68 -0.28 0.06
remittances 0.05 0.28 0.06 0.01 0.08
remittances_gdp -0.05 0.23 -0.27 0.18 -0.09
gdp -0.03 0.01 0.20 -0.11 0.13
stock -0.05 0.76 -0.08 0.04 -0.17
unemployment -0.09 -0.13 0.02 -0.12 -0.06
gdp_per -0.18 -0.14 0.72 -0.44 -0.19
inflation 0.07 0.02 -0.28 0.25 0.11
vulnerable_emp 0.27 0.09 -0.60 0.71 0.28
maternal_mortality 0.24 0.00 -0.43 0.74 0.22
exchange_rate 1.00 -0.04 -0.16 0.29 0.39
deportations -0.04 1.00 -0.19 0.13 -0.23
internet -0.16 -0.19 1.00 -0.60 -0.14
poverty 0.29 0.13 -0.60 1.00 0.23
dist_pop 0.39 -0.23 -0.14 0.23 1.00
dist_cap 0.37 -0.26 -0.11 0.21 1.00
terror 0.20 0.21 -0.52 0.47 0.27
log_remittances 0.08 0.21 0.19 -0.06 0.03
log_gdp 0.04 0.00 0.43 -0.27 0.10
dist_cap terror log_remittances log_gdp
year 0.06 -0.09 0.26 0.14
remittances 0.09 0.21 0.69 0.45
remittances_gdp -0.11 0.24 0.13 -0.59
gdp 0.13 0.04 0.36 0.60
stock -0.19 0.21 0.30 0.19
unemployment -0.04 -0.12 -0.13 -0.06
gdp_per -0.15 -0.58 0.11 0.51
inflation 0.10 0.27 -0.14 -0.16
vulnerable_emp 0.25 0.57 -0.01 -0.40
maternal_mortality 0.21 0.39 -0.06 -0.23
exchange_rate 0.37 0.20 0.08 0.04
deportations -0.26 0.21 0.21 0.00
internet -0.11 -0.52 0.19 0.43
poverty 0.21 0.47 -0.06 -0.27
dist_pop 1.00 0.27 0.03 0.10
dist_cap 1.00 0.24 0.04 0.12
terror 0.24 1.00 0.20 -0.08
log_remittances 0.04 0.20 1.00 0.53
log_gdp 0.12 -0.08 0.53 1.00
## Create visual correlation matrix (Eva's example)
remit_train |>
select(where(is.numeric)) |>
cor(use = "complete.obs") |>
corrplot(method = "color",
type = "upper",
tl.col = "black",
tl.srt = 45,
title = "Correlation Matrix",
           mar = c(0,0,2,0))
## Create visual correlation matrix (Gaby's example)
library(reshape2)
## Select numeric variables and log-transform the skewed ones
numeric_vars_log <- remit_train %>%
select(remittances_gdp, remittances, stock, unemployment, gdp_per,
inflation, internet, dist_cap, terror) %>%
mutate(
remittances_gdp = log1p(remittances_gdp),
remittances = log1p(remittances),
stock = log1p(stock), # migrant stock
gdp_per = log1p(gdp_per),
internet = log1p(internet),
dist_cap = log1p(dist_cap)
) %>%
na.omit()
cor_matrix_log <- cor(numeric_vars_log, use = "complete.obs")
melted_cor_log <- melt(cor_matrix_log)
ggplot(melted_cor_log, aes(Var1, Var2, fill = value)) +
geom_tile() +
geom_text(aes(label = round(value, 2)), size = 3) +
scale_fill_gradient2(
low = "blue", mid = "white", high = "red",
midpoint = 0, limits = c(-1, 1)
) +
labs(title = "Correlation Heatmap (Log-Transformed Variables)") +
theme_minimal() +
  theme(axis.text.x = element_text(angle = 45, hjust = 1))
## Calculate average remittances per year and plot
remit_train |>
group_by(year) |>
summarise(avg_remittances = mean(remittances, na.rm = TRUE)) |>
ggplot(aes(x = year, y = avg_remittances)) +
geom_line(color = "steelblue", linewidth = 1) +
geom_point(color = "steelblue") +
theme_minimal() +
labs(title = "Average Remittances Over Time (1994-2024)",
x = "Year",
y = "Average Remittances (USD)")Remittances show a clear upward trend over 30 years. We can see:
## Calculate average remittances as % of GDP per year and plot
remit_train |>
group_by(year) |>
summarise(avg_remittances_gdp = mean(remittances_gdp, na.rm = TRUE)) |>
ggplot(aes(x = year, y = avg_remittances_gdp)) +
geom_line(color = "steelblue", linewidth = 1) +
geom_point(color = "steelblue") +
theme_minimal() +
labs(title = "Average Remittances as % of GDP Over Time",
x = "Year",
y = "Remittances as % of GDP")Remittances as a percentage of GDP have stayed relatively stable around 3-4% over time. This means remittances are growing roughly in line with GDP growth, not becoming more or less important to economies over time.
Past changes in country-of-origin conditions are likely more illustrative of future remittances than current conditions. For example, while a downward shock in GDP may influence current migration, migrants may take time to settle in the US before they begin remitting back home. These lagged effects seem most plausible for GDP, unemployment, terror, deportations, and changes in migrant stock (inward migration).
## Lagged (1 year)
remit_lag <- remit_train |>
mutate(year = as.numeric(year)) |>
arrange(country_name, year) |>
group_by(country_name) |>
mutate(
gdp_lag = lag(gdp_per),
unemp_lag = lag(unemployment),
terror_lag = lag(terror),
deportations_lag = lag(deportations),
    stock_lag = lag(stock)
) |>
ungroup()
# Verifying that the lag worked.
remit_lag |>
select(country_name, year, gdp_per, gdp_lag) |>
arrange(country_name, year) |>
filter(!is.na(gdp_lag)) |>
  slice_head(n = 10)
# A tibble: 10 × 4
country_name year gdp_per gdp_lag
<chr> <dbl> <dbl> <dbl>
1 Afghanistan 2009 452. 382.
2 Afghanistan 2010 561. 452.
3 Afghanistan 2011 607. 561.
4 Afghanistan 2012 651. 607.
5 Afghanistan 2013 637. 651.
6 Afghanistan 2014 625. 637.
7 Afghanistan 2015 566. 625.
8 Afghanistan 2016 522. 566.
9 Afghanistan 2017 525. 522.
10 Afghanistan 2018 491. 525.
## Lagged predictors relationship with remittances (as % of GDP)
## GDP per capita
# Lagged vs Unlagged
remit_lag |>
pivot_longer(cols = c(gdp_per, gdp_lag),
names_to = "type",
values_to = "value") |>
ggplot(aes(value, remittances_gdp, color = type)) +
geom_point(alpha = 0.3) +
geom_smooth(se = FALSE) +
theme_minimal() +
labs(title = "Lagged vs Current GDP per Capita",
x = "GDP per capita",
color = "Variable") ## Doesn't Necessarily Improve Model Fit. Overall there seem to be better ways to verify whether lagged variable would improve the interpretability of our models.
## Comparing Lagged vs Current GDP per capita for key countries.
remit_lag |>
filter(country_name %in% c("Nicaragua", "El Salvador", "Honduras",
"Guatemala", "Haiti", "India")) |>
pivot_longer(
cols = c(gdp_per, gdp_lag),
names_to = "gdp_type",
values_to = "gdp_value"
) |>
ggplot(aes(x = gdp_value, y = remittances_gdp,
color = country_name, linetype = gdp_type)) +
geom_point(alpha = 0.4) +
geom_smooth(method = "lm", se = FALSE) +
theme_minimal() +
labs(
title = "Lagged vs Current GDP per Capita",
x = "GDP per capita (current or lagged)",
linetype = "GDP variable"
  )
The suggestion is that lagged GDP demonstrates a slightly stronger relationship and thus may improve model fit; shocks or changes to prior GDP could help explain current remittance amounts.
## Comparing Lagged vs Current Unemployment for key countries.
remit_lag |>
filter(country_name %in% c("Nicaragua", "El Salvador", "Honduras",
"Guatemala", "Haiti", "India")) |>
pivot_longer(
cols = c(unemployment, unemp_lag),
names_to = "unemp_type",
values_to = "unemp_value"
) |>
ggplot(aes(x = unemp_value, y = remittances_gdp,
color = country_name, linetype = unemp_type)) +
geom_point(alpha = 0.4) +
geom_smooth(method = "lm", se = FALSE) +
theme_minimal() +
labs(
title = "Lagged vs Current Unemployment",
x = "Unemployment (current or lagged)",
linetype = "GDP variable"
)For most countries lagged unemployment does not seem to alter model fit substantially for any country other than Haiti.
It likely won’t improve our model fit and thus shouldn’t be included.
## Comparing Lagged vs Current Terror for key countries.
remit_lag |>
filter(country_name %in% c("Nicaragua", "El Salvador", "Honduras",
"Guatemala", "Haiti", "India")) |>
pivot_longer(
cols = c(terror, terror_lag),
names_to = "terror_type",
values_to = "terror_value"
) |>
ggplot(aes(x = terror_value, y = remittances_gdp,
color = country_name, linetype = terror_type)) +
geom_point(alpha = 0.4) +
geom_smooth(method = "lm", se = FALSE) +
theme_minimal() +
labs(
title = "Lagged vs Current Terror",
x = "Terror (current or lagged)",
linetype = "Terror variable"
)
Terror varies less over time, and lagged terror levels may not add much explanatory power.
## Comparing Lagged vs Current Deportations for key countries.
remit_lag |>
filter(country_name %in% c("Nicaragua", "El Salvador", "Honduras",
"Guatemala", "Haiti", "India")) |>
pivot_longer(
cols = c(deportations, deportations_lag),
names_to = "deportations_type",
values_to = "deportations_value"
) |>
ggplot(aes(x = deportations_value, y = remittances_gdp,
color = country_name, linetype = deportations_type)) +
geom_point(alpha = 0.4) +
geom_smooth(method = "lm", se = FALSE) +
theme_minimal() +
labs(
title = "Lagged vs Current Deportations",
x = "Deportations (current or lagged)",
linetype = "Deportations variable"
)
There is a much stronger relationship for key countries when including lagged deportations, suggesting lagged deportation effects help explain future remittances.
## Comparing Lagged vs Current Migrant Stock for key countries.
remit_lag |>
filter(country_name %in% c("Nicaragua", "El Salvador", "Honduras",
"Guatemala", "Haiti", "India")) |>
pivot_longer(
cols = c(stock, stock_lag),
names_to = "stock_type",
values_to = "stock_value"
) |>
ggplot(aes(x = stock_value, y = remittances_gdp,
color = country_name, linetype = stock_type)) +
geom_point(alpha = 0.4) +
geom_smooth(method = "lm", se = FALSE) +
theme_minimal() +
labs(
title = "Lagged vs Current Changes in Migrant Stock",
x = "Migrant Stock (current or lagged)",
linetype = "Migrant Stock variable"
)
The change in the relationship is weaker here, so lagged migrant stock probably doesn't warrant inclusion.
Takeaways:
Lagged deportations and lagged GDP improve model fit the most (they shift the slopes of our fitted relationships the most).
For the other predictors (migrant stock, terror, and unemployment), the relationships for our key countries barely changed, indicating little added explanatory power.
Next Steps: using step_mutate in our recipe to add lags for gdp_per and deportations would account for this.
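A minimal sketch of the step the note describes, hedged: lag() inside step_mutate operates on the whole column in row order, so after a random split it can lag across country boundaries; computing lags per country before splitting (as we do later for KNN) is the safer alternative. The recipe name and predictor set below are illustrative only.

```r
## Sketch only: add one-period lags inside a recipe via step_mutate.
## Caveat: lag() here is row-order based and ignores country grouping.
recipe_lagged <- recipe(remittances_gdp ~ gdp_per + deportations,
                        data = remit_train) |>
  step_impute_median(all_numeric_predictors()) |>
  step_mutate(
    gdp_lag          = lag(gdp_per),
    deportations_lag = lag(deportations)
  )
```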
## To improve the accuracy of our estimated error rates, we set up a 10-fold cross validation with 5 repetitions since we have a relatively small number of observations within the training data
remit_folds <- vfold_cv(data = remit_train, v = 10, repeats = 5)
## Recipe (baseline model)
recipe_baseline <-
recipe(remittances_gdp ~ stock + gdp_per + unemployment + dist_cap + terror + deportations + internet + inflation,
data = remit_train) |>
step_impute_median(all_numeric_predictors()) |>
step_impute_mode(all_nominal_predictors())|>
step_mutate(
gdp_lag = lag(gdp_per),
unemp_lag = lag(unemployment)) |>
step_mutate(
gdp_per = log(gdp_per + 1)) |>
step_normalize(all_numeric_predictors())
## Processing the full training data using parameter specification.
bake(prep(recipe_baseline, training = remit_train), new_data = remit_train)
# A tibble: 3,292 × 11
stock gdp_per unemployment dist_cap terror deportations internet inflation
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 0.830 1.34 0.552 -0.522 -0.300 -0.147 1.01 -0.102
2 0.0636 0.681 -0.0903 -0.500 1.52 -0.0455 0.495 -0.0379
3 -0.162 -0.0167 2.68 0.921 -0.300 -0.163 -0.852 -0.0473
4 7.78 0.348 -0.941 -1.54 0.610 8.07 -0.897 -0.0118
5 -0.162 -0.837 -0.952 -0.143 -0.300 -0.160 -0.130 -0.0870
6 -0.162 -0.353 2.64 0.921 0.610 -0.163 -0.952 -0.101
7 -0.235 1.92 -0.618 -0.553 -1.21 -0.163 1.92 -0.114
8 -0.162 -0.950 1.65 1.28 0.610 -0.163 0.429 -0.0607
9 -0.232 -0.262 -0.977 1.09 -0.300 -0.163 0.214 -0.0826
10 -0.0648 -1.05 -0.622 -1.57 -0.300 -0.157 -1.06 0.0591
# ℹ 3,282 more rows
# ℹ 3 more variables: remittances_gdp <dbl>, gdp_lag <dbl>, unemp_lag <dbl>
In this section, we compare five different regression models to predict remittances as a percentage of GDP.
## Linear model
lm_baseline <- linear_reg() |>
set_mode(mode = "regression") |>
set_engine(engine = "lm")
## Create workflow
lm_workflow <- workflow() |>
add_recipe(recipe_baseline) |>
add_model(lm_baseline)
## Fit model
lm_results <- lm_workflow |>
fit_resamples(resamples = remit_folds)
## Collect RMSE
collect_metrics(lm_results)
# A tibble: 2 × 6
.metric .estimator mean n std_err .config
<chr> <chr> <dbl> <int> <dbl> <chr>
1 rmse standard 6.54 50 0.168 Preprocessor1_Model1
2 rsq standard 0.0831 50 0.00418 Preprocessor1_Model1
This establishes baseline performance using ordinary least squares regression with no regularization, giving us a benchmark against which to compare the other models.
library(tidymodels)
library(glmnet)
Loading required package: Matrix
Attaching package: 'Matrix'
The following objects are masked from 'package:tidyr':
expand, pack, unpack
Loaded glmnet 4.1-10
library(dplyr)
# We clean remittances before folding due to missing values
train_data2 <- remit_train %>%
filter(!is.na(remittances_gdp))
remit_folds <- vfold_cv(train_data2, v = 10)
# glmnet cannot handle NA, Inf/-Inf, or constant (zero-variance) columns, so we
# extend the recipe to force the predictors into a form ridge/lasso can use,
# with comparable scales across predictors. Note that this builds on
# recipe_baseline, so some steps appear twice in the workflow printout.
recipe_glmnet <- recipe_baseline %>%
step_mutate(across(all_numeric_predictors(),
~ if_else(is.finite(.x), .x, NA_real_))) %>%
step_impute_median(all_numeric_predictors()) %>%
step_impute_mode(all_nominal_predictors()) %>%
step_zv(all_predictors()) %>%
step_normalize(all_numeric_predictors())
ctrl <- control_grid(save_pred = TRUE, verbose = TRUE)
grid30 <- grid_regular(penalty(), levels = 30)
metrics1 <- metric_set(rmse)
# The control object above makes errors visible and traceable.
# This defines ridge regression with a tuned penalty (mixture = 0).
ridge_spec <- linear_reg(penalty = tune(), mixture = 0) %>%
set_mode("regression") %>%
set_engine("glmnet")
# build the workflow (recipe + model)
ridge_wf <- workflow() %>%
add_recipe(recipe_glmnet) %>%
add_model(ridge_spec)
#We tune the penalty using cross-validation
ridge_res <- tune_grid(
ridge_wf,
resamples = remit_folds,
grid = grid30,
metrics = metrics1,
control = ctrl
)
best_ridge <- select_best(ridge_res, metric = "rmse")
final_ridge_wf <- finalize_workflow(ridge_wf, best_ridge)
ridge_fit <- fit(final_ridge_wf, data = train_data2)
# results
best_ridge
# A tibble: 1 × 2
penalty .config
<dbl> <chr>
1 0.0000000001 Preprocessor1_Model01
final_ridge_wf
══ Workflow ════════════════════════════════════════════════════════════════════
Preprocessor: Recipe
Model: linear_reg()
── Preprocessor ────────────────────────────────────────────────────────────────
10 Recipe Steps
• step_impute_median()
• step_impute_mode()
• step_mutate()
• step_mutate()
• step_normalize()
• step_mutate()
• step_impute_median()
• step_impute_mode()
• step_zv()
• step_normalize()
── Model ───────────────────────────────────────────────────────────────────────
Linear Regression Model Specification (regression)
Main Arguments:
penalty = 0.0000000001
mixture = 0
Computational engine: glmnet
ridge_fit
══ Workflow [trained] ══════════════════════════════════════════════════════════
Preprocessor: Recipe
Model: linear_reg()
── Preprocessor ────────────────────────────────────────────────────────────────
10 Recipe Steps
• step_impute_median()
• step_impute_mode()
• step_mutate()
• step_mutate()
• step_normalize()
• step_mutate()
• step_impute_median()
• step_impute_mode()
• step_zv()
• step_normalize()
── Model ───────────────────────────────────────────────────────────────────────
Call: glmnet::glmnet(x = maybe_matrix(x), y = y, family = "gaussian", alpha = ~0)
Df %Dev Lambda
1 10 0.00 1292.00
2 10 0.08 1177.00
3 10 0.08 1072.00
4 10 0.09 977.10
5 10 0.10 890.30
6 10 0.11 811.20
7 10 0.12 739.10
8 10 0.13 673.50
9 10 0.14 613.60
10 10 0.16 559.10
11 10 0.17 509.50
12 10 0.19 464.20
13 10 0.21 423.00
14 10 0.23 385.40
15 10 0.25 351.20
16 10 0.27 320.00
17 10 0.30 291.50
18 10 0.32 265.60
19 10 0.35 242.00
20 10 0.39 220.50
21 10 0.42 200.90
22 10 0.46 183.10
23 10 0.50 166.80
24 10 0.54 152.00
25 10 0.59 138.50
26 10 0.64 126.20
27 10 0.70 115.00
28 10 0.76 104.80
29 10 0.82 95.46
30 10 0.89 86.98
31 10 0.97 79.26
32 10 1.04 72.21
33 10 1.13 65.80
34 10 1.22 59.95
35 10 1.31 54.63
36 10 1.41 49.77
37 10 1.52 45.35
38 10 1.63 41.32
39 10 1.74 37.65
40 10 1.86 34.31
41 10 1.99 31.26
42 10 2.12 28.48
43 10 2.26 25.95
44 10 2.41 23.65
45 10 2.55 21.55
46 10 2.71 19.63
...
and 54 more lines.
The ridge regularization path shows deviance explained increasing as the penalty decreases, and cross-validation selected a penalty effectively equal to zero. This suggests that ridge shrinkage does not improve on the unpenalized linear model for these data.
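One quick way to see this is to plot the cross-validated RMSE across the penalty grid; a curve that is flat near zero and rises for larger penalties is consistent with an effectively unpenalized optimum. A sketch, assuming ridge_res from the tuning step above:

```r
## Visualize CV RMSE across the ridge penalty grid.
autoplot(ridge_res) +
  labs(title = "Ridge: cross-validated RMSE across penalty values")
```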
# we specify the LASSO model
lasso_spec <- linear_reg(penalty = tune(), mixture = 1) %>%
set_mode("regression") %>%
set_engine("glmnet")
# we build the workflow - preprocess to prevent leakage
lasso_wf <- workflow() %>%
add_recipe(recipe_glmnet) %>%
add_model(lasso_spec)
# Tune lambda penalty using cross-validation
lasso_res <- tune_grid(
lasso_wf,
resamples = remit_folds,
grid = grid30,
metrics = metrics1,
control = ctrl
)
# choose the best lambda (lowest RMSE)
best_lasso <- select_best(lasso_res, metric = "rmse")
# finalize the workflow
final_lasso_wf <- finalize_workflow(lasso_wf, best_lasso)
# fit the final LASSO model on all training data
lasso_fit <- fit(final_lasso_wf, data = train_data2)
best_lasso
# A tibble: 1 × 2
penalty .config
<dbl> <chr>
1 0.0189 Preprocessor1_Model25
final_lasso_wf
══ Workflow ════════════════════════════════════════════════════════════════════
Preprocessor: Recipe
Model: linear_reg()
── Preprocessor ────────────────────────────────────────────────────────────────
10 Recipe Steps
• step_impute_median()
• step_impute_mode()
• step_mutate()
• step_mutate()
• step_normalize()
• step_mutate()
• step_impute_median()
• step_impute_mode()
• step_zv()
• step_normalize()
── Model ───────────────────────────────────────────────────────────────────────
Linear Regression Model Specification (regression)
Main Arguments:
penalty = 0.018873918221351
mixture = 1
Computational engine: glmnet
lasso_fit
══ Workflow [trained] ══════════════════════════════════════════════════════════
Preprocessor: Recipe
Model: linear_reg()
── Preprocessor ────────────────────────────────────────────────────────────────
10 Recipe Steps
• step_impute_median()
• step_impute_mode()
• step_mutate()
• step_mutate()
• step_normalize()
• step_mutate()
• step_impute_median()
• step_impute_mode()
• step_zv()
• step_normalize()
── Model ───────────────────────────────────────────────────────────────────────
Call: glmnet::glmnet(x = maybe_matrix(x), y = y, family = "gaussian", alpha = ~1)
Df %Dev Lambda
1 0 0.00 1.29200
2 1 0.59 1.17700
3 1 1.09 1.07200
4 1 1.49 0.97710
5 1 1.83 0.89030
6 1 2.12 0.81120
7 2 2.57 0.73910
8 2 2.96 0.67350
9 2 3.28 0.61360
10 2 3.55 0.55910
11 3 3.90 0.50950
12 3 4.21 0.46420
13 3 4.46 0.42300
14 3 4.67 0.38540
15 3 4.85 0.35120
16 4 5.04 0.32000
17 6 5.30 0.29150
18 6 5.63 0.26560
19 7 5.96 0.24200
20 8 6.40 0.22050
21 8 6.77 0.20090
22 8 7.08 0.18310
23 9 7.34 0.16680
24 9 7.56 0.15200
25 9 7.75 0.13850
26 9 7.90 0.12620
27 9 8.03 0.11500
28 9 8.14 0.10480
29 9 8.23 0.09546
30 9 8.30 0.08698
31 9 8.36 0.07926
32 9 8.41 0.07221
33 9 8.45 0.06580
34 9 8.49 0.05995
35 9 8.52 0.05463
36 9 8.54 0.04977
37 10 8.56 0.04535
38 10 8.58 0.04132
39 10 8.59 0.03765
40 10 8.60 0.03431
41 10 8.61 0.03126
42 10 8.62 0.02848
43 10 8.63 0.02595
44 10 8.63 0.02365
45 10 8.64 0.02155
46 10 8.64 0.01963
...
and 22 more lines.
tidy(lasso_fit) %>%
filter(term != "(Intercept)") %>%
filter(estimate != 0) %>%
arrange(desc(abs(estimate)))
# A tibble: 10 × 3
term estimate penalty
<chr> <dbl> <dbl>
1 gdp_per -2.37 0.0189
2 deportations 1.43 0.0189
3 internet 0.904 0.0189
4 stock -0.811 0.0189
5 unemployment 0.672 0.0189
6 terror -0.567 0.0189
7 dist_cap -0.378 0.0189
8 unemp_lag 0.271 0.0189
9 inflation -0.155 0.0189
10 gdp_lag -0.0308 0.0189
# RMSE comparison
bind_rows(
collect_metrics(ridge_res) |>
filter(.metric == "rmse") |>
inner_join(best_ridge, by = "penalty") |>
mutate(model = "ridge"),
collect_metrics(lasso_res) |>
filter(.metric == "rmse") |>
inner_join(best_lasso, by = "penalty") |>
mutate(model = "lasso")
) |>
select(model, penalty, mean, std_err)
# A tibble: 2 × 4
model penalty mean std_err
<chr> <dbl> <dbl> <dbl>
1 ridge 0.0000000001 6.53 0.412
2 lasso 0.0189 6.53 0.412
At the selected penalty, the LASSO model retained all ten predictors. Remittance intensity is negatively associated with GDP per capita and macroeconomic instability, while unemployment, deportations, and internet access exhibit positive relationships, consistent with counter-cyclical and transaction-cost mechanisms.
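To see the order in which predictors enter as the penalty shrinks, one option is glmnet's built-in coefficient-path plot, applied to the engine fit extracted from the workflow. A sketch, assuming lasso_fit from above:

```r
## Coefficient paths against log(lambda); labels mark predictor indices.
lasso_fit |>
  extract_fit_engine() |>
  plot(xvar = "lambda", label = TRUE)
```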
We now fit a random forest, a bagging ensemble that reduces variance error by averaging many decorrelated trees. At each split, only a random subset of predictors is considered. Its key hyperparameters are mtry, the number of predictors considered at each split (which can be tuned toward the optimal number of useful predictors), and min_n, the minimum node size at which splitting stops.
library(vip)
# For missingness inside the dependent variable
remit_train_clean <- remit_train |>
filter(!is.na(remittances_gdp))
# Smaller CV for run time optimization
rf_folds <- vfold_cv(remit_train_clean, v = 5)
## Given the high degree of missingness, a recipe that handles NAs via median imputation keeps the model from breaking down.
recipe_alt <-
recipe(remittances_gdp ~ stock + gdp_per + unemployment + dist_cap + terror + deportations + internet + inflation + country_name,
data = remit_train_clean) |>
update_role(country_name, new_role = "id") |>
step_impute_median(all_numeric_predictors()) |>
step_lag(gdp_per, unemployment, lag = 1) |>
step_mutate(
gdp_per = log(gdp_per + 1)) |>
step_normalize(all_numeric_predictors())
bake(prep(recipe_alt, training = remit_train_clean), new_data = remit_train_clean)
# A tibble: 3,292 × 12
stock gdp_per unemployment dist_cap terror deportations internet inflation
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 0.830 1.34 0.552 -0.522 -0.300 -0.147 1.01 -0.102
2 0.0636 0.681 -0.0903 -0.500 1.52 -0.0455 0.495 -0.0379
3 -0.162 -0.0167 2.68 0.921 -0.300 -0.163 -0.852 -0.0473
4 7.78 0.348 -0.941 -1.54 0.610 8.07 -0.897 -0.0118
5 -0.162 -0.837 -0.952 -0.143 -0.300 -0.160 -0.130 -0.0870
6 -0.162 -0.353 2.64 0.921 0.610 -0.163 -0.952 -0.101
7 -0.235 1.92 -0.618 -0.553 -1.21 -0.163 1.92 -0.114
8 -0.162 -0.950 1.65 1.28 0.610 -0.163 0.429 -0.0607
9 -0.232 -0.262 -0.977 1.09 -0.300 -0.163 0.214 -0.0826
10 -0.0648 -1.05 -0.622 -1.57 -0.300 -0.157 -1.06 0.0591
# ℹ 3,282 more rows
# ℹ 4 more variables: country_name <chr>, remittances_gdp <dbl>,
# lag_1_gdp_per <dbl>, lag_1_unemployment <dbl>
## Creating a Random Forest Model set up for tuning
rf_mod <- rand_forest(
trees = tune(),
mtry = tune(),
min_n = tune()) |>
set_mode(mode = "regression") |>
set_engine(engine = "ranger",
importance = "impurity",
num.threads = 4)
## Creating a workflow. We use the alternative recipe because it accounts for missingness that would otherwise cause the model to fail.
rf_wf <- workflow() |>
add_recipe(recipe_alt) |>
add_model(rf_mod)
## Finalize parameters
rf_params <- rf_wf |>
extract_parameter_set_dials() |>
finalize(remit_train_clean)
rf_params
## tuning grid
rf_grid <- grid_max_entropy(
rf_params,
size = 20 )
## Tuning within cross-validation over our hyperparameter grid.
rf_tuned <- rf_wf |>
tune_grid(
resamples = rf_folds,
grid = rf_grid,
control = control_grid(save_pred = TRUE))
## Measuring the RMSE
rf_tuned |>
collect_metrics()
# A tibble: 40 × 9
mtry trees min_n .metric .estimator mean n std_err .config
<int> <int> <int> <chr> <chr> <dbl> <int> <dbl> <chr>
1 4 399 4 rmse standard 3.33 5 0.159 Preprocessor1_Model…
2 4 399 4 rsq standard 0.810 5 0.0139 Preprocessor1_Model…
3 7 1769 5 rmse standard 3.09 5 0.153 Preprocessor1_Model…
4 7 1769 5 rsq standard 0.829 5 0.00980 Preprocessor1_Model…
5 10 1079 32 rmse standard 3.79 5 0.214 Preprocessor1_Model…
6 10 1079 32 rsq standard 0.728 5 0.0137 Preprocessor1_Model…
7 9 933 6 rmse standard 3.04 5 0.127 Preprocessor1_Model…
8 9 933 6 rsq standard 0.831 5 0.00614 Preprocessor1_Model…
9 10 1963 30 rmse standard 3.75 5 0.213 Preprocessor1_Model…
10 10 1963 30 rsq standard 0.734 5 0.0132 Preprocessor1_Model…
# ℹ 30 more rows
rf_tuned |>
show_best(metric = "rmse", n = 10)
# A tibble: 10 × 9
mtry trees min_n .metric .estimator mean n std_err .config
<int> <int> <int> <chr> <chr> <dbl> <int> <dbl> <chr>
1 17 1011 5 rmse standard 3.01 5 0.120 Preprocessor1_Model…
2 9 933 6 rmse standard 3.04 5 0.127 Preprocessor1_Model…
3 15 1970 7 rmse standard 3.05 5 0.128 Preprocessor1_Model…
4 7 1769 5 rmse standard 3.09 5 0.153 Preprocessor1_Model…
5 16 119 10 rmse standard 3.12 5 0.140 Preprocessor1_Model…
6 19 859 16 rmse standard 3.33 5 0.177 Preprocessor1_Model…
7 4 399 4 rmse standard 3.33 5 0.159 Preprocessor1_Model…
8 11 1520 18 rmse standard 3.40 5 0.188 Preprocessor1_Model…
9 12 500 20 rmse standard 3.47 5 0.187 Preprocessor1_Model…
10 19 1767 21 rmse standard 3.51 5 0.201 Preprocessor1_Model…
## selecting the best specification and fit it to the full training data.
best_rf <- rf_tuned |>
select_best(metric = "rmse")
final_rf_wf <- rf_wf |>
finalize_workflow(best_rf)
final_rf_fit <- final_rf_wf |>
fit(data = remit_train_clean)
# Variable importance
final_rf_fit |>
extract_fit_parsnip() |>
vip(num_features = 10)
KNN predicts remittances by averaging the values of the K most similar country-year observations.
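The averaging idea can be made concrete with a toy sketch (not the kknn engine we tune below; X, y, and x_new are illustrative placeholders for a scaled predictor matrix, outcome vector, and new observation):

```r
## Toy KNN regression: average the outcomes of the k nearest training rows.
knn_predict_one <- function(X, y, x_new, k = 5) {
  dists <- sqrt(rowSums(sweep(X, 2, x_new)^2))  # Euclidean distance to each row
  nearest <- order(dists)[seq_len(k)]           # indices of the k closest rows
  mean(y[nearest])                              # average their outcomes
}
```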
## Create lagged variables by country
remit_train_lagged <- remit_train |>
arrange(country_name, year) |>
group_by(country_name) |>
mutate(
gdp_lag = lag(gdp_per),
deportations_lag = lag(deportations)
) |>
ungroup()
## Remove missing values
remit_train_clean_knn <- remit_train_lagged |>
select(remittances_gdp, stock, gdp_per, unemployment, dist_cap,
terror, deportations, internet, inflation, gdp_lag, deportations_lag) |>
drop_na()
## Cross-validation setup
set.seed(20251211)
knn_folds <- vfold_cv(data = remit_train_clean_knn, v = 10, repeats = 5)
## Recipe: log transform and normalize
recipe_knn <-
recipe(remittances_gdp ~ ., data = remit_train_clean_knn) |>
step_mutate(gdp_per = log(gdp_per + 1)) |>
step_normalize(all_numeric_predictors())
## Model: KNN with tunable K
knn_mod <-
nearest_neighbor(neighbors = tune()) |>
set_engine("kknn") |>
set_mode("regression")
## Grid: test K from 1 to 99
knn_grid <- grid_regular(neighbors(range = c(1, 99)), levels = 10)
## Workflow
knn_workflow <-
workflow() |>
add_recipe(recipe_knn) |>
add_model(knn_mod)
## Tune
knn_results <-
knn_workflow |>
tune_grid(
resamples = knn_folds,
grid = knn_grid,
metrics = metric_set(rmse, rsq)
)
## Results
knn_results |> collect_metrics()
# A tibble: 20 × 7
neighbors .metric .estimator mean n std_err .config
<int> <chr> <chr> <dbl> <int> <dbl> <chr>
1 1 rmse standard 3.20 50 0.119 Preprocessor1_Model01
2 1 rsq standard 0.746 50 0.0140 Preprocessor1_Model01
3 11 rmse standard 3.08 50 0.0718 Preprocessor1_Model02
4 11 rsq standard 0.744 50 0.0108 Preprocessor1_Model02
5 22 rmse standard 3.32 50 0.0609 Preprocessor1_Model03
6 22 rsq standard 0.710 50 0.0104 Preprocessor1_Model03
7 33 rmse standard 3.56 50 0.0627 Preprocessor1_Model04
8 33 rsq standard 0.674 50 0.0105 Preprocessor1_Model04
9 44 rmse standard 3.76 50 0.0652 Preprocessor1_Model05
10 44 rsq standard 0.639 50 0.0107 Preprocessor1_Model05
11 55 rmse standard 3.91 50 0.0669 Preprocessor1_Model06
12 55 rsq standard 0.610 50 0.0109 Preprocessor1_Model06
13 66 rmse standard 4.04 50 0.0683 Preprocessor1_Model07
14 66 rsq standard 0.585 50 0.0111 Preprocessor1_Model07
15 77 rmse standard 4.15 50 0.0696 Preprocessor1_Model08
16 77 rsq standard 0.564 50 0.0112 Preprocessor1_Model08
17 88 rmse standard 4.23 50 0.0706 Preprocessor1_Model09
18 88 rsq standard 0.546 50 0.0113 Preprocessor1_Model09
19 99 rmse standard 4.31 50 0.0714 Preprocessor1_Model10
20 99 rsq standard 0.532 50 0.0112 Preprocessor1_Model10
knn_results |> show_best(metric = "rmse")
# A tibble: 5 × 7
neighbors .metric .estimator mean n std_err .config
<int> <chr> <chr> <dbl> <int> <dbl> <chr>
1 11 rmse standard 3.08 50 0.0718 Preprocessor1_Model02
2 1 rmse standard 3.20 50 0.119 Preprocessor1_Model01
3 22 rmse standard 3.32 50 0.0609 Preprocessor1_Model03
4 33 rmse standard 3.56 50 0.0627 Preprocessor1_Model04
5 44 rmse standard 3.76 50 0.0652 Preprocessor1_Model05
knn_results |> autoplot()
## Select best
best_k <- select_best(knn_results, metric = "rmse")
final_knn_wf <- knn_workflow |> finalize_workflow(best_k)
## Compare all models
tibble(
Model = c("OLS", "Ridge", "LASSO", "Random Forest", "KNN"),
Error = c(
collect_metrics(lm_results) |> filter(.metric == "rmse") |> pull(mean),
collect_metrics(ridge_res) |> filter(.metric == "rmse") |> inner_join(best_ridge, by = "penalty") |> pull(mean),
collect_metrics(lasso_res) |> filter(.metric == "rmse") |> inner_join(best_lasso, by = "penalty") |> pull(mean),
collect_metrics(rf_tuned) |> filter(.metric == "rmse") |> slice_min(mean) |> pull(mean),
collect_metrics(knn_results) |> filter(.metric == "rmse") |> slice_min(mean) |> pull(mean)
)
) |>
arrange(Error)
# A tibble: 5 × 2
Model Error
<chr> <dbl>
1 Random Forest 3.01
2 KNN 3.08
3 LASSO 6.53
4 Ridge 6.53
5 OLS 6.54
tibble(
Model = c("OLS", "Ridge", "LASSO", "Random Forest", "KNN"),
Error = c(
collect_metrics(lm_results) |> filter(.metric == "rmse") |> pull(mean),
collect_metrics(ridge_res) |> filter(.metric == "rmse") |> inner_join(best_ridge, by = "penalty") |> pull(mean),
collect_metrics(lasso_res) |> filter(.metric == "rmse") |> inner_join(best_lasso, by = "penalty") |> pull(mean),
collect_metrics(rf_tuned) |> filter(.metric == "rmse") |> slice_min(mean) |> pull(mean),
collect_metrics(knn_results) |> filter(.metric == "rmse") |> slice_min(mean) |> pull(mean)
)
) |>
ggplot(aes(x = Error, y = reorder(Model, -Error))) +
geom_point(size = 4, color = "steelblue") +
geom_segment(aes(x = 0, xend = Error, y = Model, yend = Model), color = "gray70") +
labs(title = "Model Performance",
subtitle = "Left is better",
x = "Prediction Error",
y = NULL) +
theme_minimal()
The best model is Random Forest (lowest error: 3.01). We now evaluate it on a held-out split of the cleaned training data.
## Create new split for Random Forest (uses clean data)
set.seed(20251211)
rf_split <- initial_split(remit_train_clean, prop = 0.8)
## Test Random Forest
test_results <- final_rf_wf |> last_fit(rf_split)
## Show results
test_results |> collect_metrics()
# A tibble: 2 × 4
.metric .estimator .estimate .config
<chr> <chr> <dbl> <chr>
1 rmse standard 2.49 Preprocessor1_Model1
2 rsq standard 0.833 Preprocessor1_Model1
On average, predictions are off by about 2.5 percentage points, and the model explains roughly 83% of the variation in remittances as a share of GDP.
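As a sanity check on these numbers, the same metrics can be recomputed directly from the collected predictions (a sketch assuming test_results from above; yardstick's metrics() defaults to RMSE, R-squared, and MAE for numeric outcomes):

```r
## Recompute test-set metrics from the predictions themselves.
test_results |>
  collect_predictions() |>
  metrics(truth = remittances_gdp, estimate = .pred)
```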
## Visualize: Actual vs Predicted
test_results |>
collect_predictions() |>
ggplot(aes(x = remittances_gdp, y = .pred)) +
geom_point(alpha = 0.5, color = "steelblue") +
geom_abline(slope = 1, intercept = 0, linetype = "dashed", color = "red") +
labs(
title = "Random Forest: Test Set Performance",
subtitle = "Points near line = good predictions",
x = "Actual Remittances (% GDP)",
y = "Predicted"
) +
theme_minimal()
Random Forest performs much better than the other models!
Earlier in the modeling process we explored six countries of interest: countries that are either increasingly important remittance recipients most likely to be affected by changes in US migration policy (deportations, overall changes in migrant stock, etc.) or that have been important players throughout the period our model covers.
We now apply the model to these six countries (plus Mexico) to see, on average, how well it responds to the predictors, particularly for countries in Latin America (and India) that are widely cited as central to contemporary migration and foreign development conversations.
# Collect predictions from last_fit or resamples
country_preds <- test_results |>
collect_predictions() |>
left_join(remit_train |>
mutate(row_id = row_number()) |>
select(row_id, country_name, year),
by = c(".row" = "row_id"))
# Respecifying our seven countries of interest and their predictions
countries_of_interest <- c("Nicaragua", "El Salvador", "Honduras",
"Guatemala", "Haiti", "India", "Mexico")
rmse_tbl <- country_preds |>
filter(country_name %in% countries_of_interest) |>
group_by(country_name) |>
summarise(
Error = rmse_vec(truth = remittances_gdp, estimate = .pred),
Avg_Remittances_GDP = mean(remittances_gdp, na.rm = TRUE),
.groups = "drop" ) |>
arrange(Error)
rmse_tbl
# A tibble: 7 × 3
country_name Error Avg_Remittances_GDP
<chr> <dbl> <dbl>
1 Mexico 0.288 2.84
2 India 0.619 2.68
3 Honduras 1.14 10.2
4 Haiti 1.75 16.1
5 Nicaragua 2.41 9.87
6 El Salvador 4.35 20.2
7 Guatemala 5.38 10.4
Key Insights
# Compute residuals (Predicted - Actual)
residuals_df <- country_preds %>% filter(country_name %in% countries_of_interest) %>%
mutate(residual = .pred - remittances_gdp)
# Plot residuals over time
ggplot(residuals_df, aes(x = year, y = residual)) +
geom_point(alpha = 0.6, color = "steelblue") +
geom_smooth(alpha = 0.4, color = "steelblue") +
geom_hline(yintercept = 0, linetype = "dashed", color = "red") +
facet_wrap(~country_name, scales = "free_y") +
labs(title = "Residuals Over Time by Country",
x = "Year",
y = "Residual (Predicted - Actual)") +
theme_minimal()
Some countries show consistent bias: underprediction in countries more reliant on remittances, with more accurate predictions in relatively less reliant countries.
Volatility spikes around 2008 and 2020 suggest the model struggles during economic shocks.
India and Mexico show stable residuals, reinforcing their reliability and increasing our ability to make policy recommendations in these contexts.
Overall, policymakers in the countries most reliant on remittances should interpret the model's output with caution, but it may be useful for larger countries (India, Mexico).
Comment
Cross-validation selected a non-zero penalty for the LASSO model, indicating that mild regularization performs at least as well as the unpenalized fit. The LASSO regularization path shows predictors entering the model one by one as the penalty decreases, highlighting its role as a variable selection method.